Quantiles based Neighborhood Method of Classification
Authors
Abstract
Similar Resources
Method of Quantiles
The underlying idea of Reich, Fuentes, and Dunson (2009) to use the asymptotic normal approximation of the quantile regression estimator as a “substitute” likelihood can be regarded as a convenient dumbing-down of the Jeffreys idea elaborated by Lavine (1995) and Dunson and Taylor (2005). The obvious disadvantage of the original Jeffreys suggestion is that it is difficult to compute/update th...
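For concreteness, here is a minimal Python sketch of the substitute-likelihood idea described above, under illustrative assumptions that are not from the paper: the quantile regression fit is obtained by directly minimizing the pinball loss, and the estimator's covariance is approximated by a crude bootstrap rather than the asymptotic formula.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, tau = 200, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)

def pinball(beta, X, y, tau):
    # check (pinball) loss minimized by quantile regression
    r = y - X @ beta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

def qr_fit(X, y, tau):
    return minimize(pinball, np.zeros(X.shape[1]), args=(X, y, tau),
                    method="Nelder-Mead").x

beta_hat = qr_fit(X, y, tau)

# crude bootstrap plug-in for the sampling covariance of the estimator
boot = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    boot.append(qr_fit(X[idx], y[idx], tau))
Sigma_hat = np.cov(np.array(boot).T)

def substitute_loglik(beta):
    # log-density (up to a constant) of N(beta_hat, Sigma_hat) at beta,
    # used in place of an exact likelihood for the regression coefficients
    d = np.asarray(beta) - beta_hat
    return -0.5 * d @ np.linalg.solve(Sigma_hat, d)

print(beta_hat, substitute_loglik([1.0, 2.0]))
```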
The Method of Simulated Quantiles
We introduce an inference method based on quantile matching, which is useful in situations where the density function does not have a closed form (but is simple to simulate) and/or moments do not exist. Functions of theoretical quantiles, which depend on the parameters of the assumed probability law, are matched with sample quantiles, which depend on observations. Since the theoretical qua...
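A minimal sketch of the quantile-matching idea, assuming a toy simulator (a shifted and scaled Student-t) standing in for a distribution that is easy to simulate but awkward to evaluate; the function names, quantile levels, and tuning choices are illustrative rather than the paper's.

```python
import numpy as np
from scipy.optimize import minimize

probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])    # quantile levels to match

def simulate(theta, size, seed):
    # stand-in for any law that is easy to simulate but awkward to evaluate
    loc, scale = theta
    return loc + scale * np.random.default_rng(seed).standard_t(df=3, size=size)

data = simulate((2.0, 1.5), size=1_000, seed=1)  # "observed" sample
sample_q = np.quantile(data, probs)

def objective(theta):
    # fixed seed = common random numbers, so the objective is deterministic
    sims = simulate(theta, size=20_000, seed=2)
    return np.sum((np.quantile(sims, probs) - sample_q) ** 2)

theta_hat = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead").x
print(theta_hat)    # should land near the true (2.0, 1.5)
```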
Predicting Conditional Quantiles via Reduction to Classification
We show how to reduce the process of predicting conditional quantiles (and the median in particular) to solving classification. The accompanying theoretical statement shows that the regret of the classifier bounds the regret of the quantile regression under a quantile loss. We also test this reduction empirically against existing quantile regression methods on large real-world datasets and disc...
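The sketch below shows one simple way such a reduction can be realized (not necessarily the construction in the paper): train a binary classifier for the event y > t on a grid of thresholds t, then report, for each input, the largest threshold that is still exceeded with probability at least 1 − τ. The model and data choices are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, tau = 2_000, 0.9                      # estimate the 0.9 conditional quantile
X = rng.uniform(-2, 2, size=(n, 1))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(scale=1.0 + 0.5 * np.abs(X[:, 0]), size=n)

# one binary classifier per threshold t, predicting the event y > t
thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
classifiers = [LogisticRegression().fit(X, (y > t).astype(int)) for t in thresholds]

def predict_quantile(x_new):
    x_new = np.atleast_2d(x_new)
    # P(y > t | x) for each threshold (rows) and each query point (columns)
    p_exceed = np.array([clf.predict_proba(x_new)[:, 1] for clf in classifiers])
    ok = p_exceed >= 1 - tau
    # index of the last threshold still exceeded with probability >= 1 - tau
    last_ok = ok.cumsum(axis=0).argmax(axis=0)
    return thresholds[np.where(ok.any(axis=0), last_ok, 0)]

print(predict_quantile([[0.0], [1.5]]))  # rough 0.9-quantile estimates
```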
Constrained Classification and Ranking via Quantiles
In most machine learning applications, classification accuracy is not the primary metric of interest. Binary classifiers which face class imbalance are often evaluated by the Fβ score, area under the precision-recall curve, Precision at K, and more. The maximization of many of these metrics can be expressed as a constrained optimization problem, where the constraint is a function of the classif...
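As one illustration of how quantiles can encode such a constraint (an assumption made for exposition, not the paper's algorithm), forcing a classifier to flag roughly K examples amounts to thresholding its scores at the (1 − K/n)-quantile of the score distribution, after which Precision at K can be evaluated directly.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    scores, labels = np.asarray(scores), np.asarray(labels)
    # quantile-based cut-off: flags roughly k examples when scores are continuous
    threshold = np.quantile(scores, 1 - k / len(scores))
    flagged = scores >= threshold
    return labels[flagged].mean()        # fraction of true positives among flagged

rng = np.random.default_rng(5)
labels = (rng.random(1_000) < 0.1).astype(int)              # imbalanced labels
scores = labels * 0.5 + rng.normal(scale=0.5, size=1_000)   # noisy classifier scores
print(precision_at_k(scores, labels, k=50))
```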
Measures of classification complexity based on neighborhood model
It is useful to measure classification complexity for understanding classification tasks, selecting feature subsets and learning algorithms. In this work, we review some current measures of classification complexity and propose two new coefficients: neighborhood dependency (ND) and neighborhood decision error (NDE). ND reflects the ratio of boundary samples to the whole sample set, while NDE...
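A hedged sketch of a neighborhood-dependency-style coefficient, assuming a k-nearest-neighbour definition of the neighborhood (the neighborhood model used here is an illustrative choice): a sample counts as a boundary sample if any of its k nearest neighbours carries a different label, and the coefficient is the fraction of such samples.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_dependency(X, y, k=5):
    X, y = np.asarray(X), np.asarray(y)
    # k + 1 neighbours because each point's nearest neighbour is itself
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neighbor_labels = y[idx[:, 1:]]                    # drop the point itself
    is_boundary = (neighbor_labels != y[:, None]).any(axis=1)
    return is_boundary.mean()                          # ratio of boundary samples

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)
print(neighborhood_dependency(X, y, k=5))   # higher value -> more class overlap
```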
Journal
Journal title: International Journal of Computational and Theoretical Statistics
Year: 2019
ISSN: 2384-4795
DOI: 10.12785/ijcts/060101